Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from this site.
- Quantum algorithms will likely play a key role in future high-performance computing (HPC) environments. These algorithms are typically expressed as quantum circuits composed of arbitrary gates or as unitary matrices. Executing them on physical devices, however, requires translation to device-compatible circuits, in a process called quantum compilation or circuit synthesis, since these devices support only a limited set of native gates. Moreover, these devices typically have specific qubit topologies, which constrain how and where gates can be applied. Consequently, logical qubits in input circuits and unitaries may need to be mapped to, and routed between, physical qubits. Current Noisy Intermediate-Scale Quantum (NISQ) devices present additional constraints: they are vulnerable to errors during gate application, and their short decoherence times cause qubits to rapidly succumb to accumulated noise, possibly corrupting computations. Circuits synthesized for NISQ devices therefore need to minimize gate counts and execution times. The problem of synthesizing device-compatible circuits while optimizing for low gate count and short execution time can be shown to be computationally intractable using analytical methods. Interest has therefore grown in heuristics-based synthesis techniques, which produce approximations of the desired algorithm while optimizing depth and gate count. In this work, we investigate genetic algorithms (GAs), a proven gradient-free optimization technique based on natural selection, for circuit synthesis. In particular, we formulate the quantum synthesis problem as a multi-objective optimization (MOO) problem, with the objectives of minimizing the approximation error, the number of multi-qubit gates, and the circuit depth. We also employ fuzzy logic for runtime parameter adaptation of the GA to enhance search efficiency and solution quality. Free, publicly accessible full text available April 1, 2026.
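As a rough illustration of the formulation described above, the following numpy-only sketch evolves a small two-qubit circuit toward a target unitary using a scalarized multi-objective cost (approximation error, two-qubit gate count, depth). The gate set, objective weights, and the simple stall-based mutation-rate adaptation standing in for the fuzzy controller are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Assumptions: a two-qubit toy problem, a tiny native gate set (RX, RZ, CNOT),
# and a fixed-weight scalarization of the three objectives. The paper's actual
# encoding, gate set, and fuzzy parameter controller are not reproduced here.

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def rx(theta):
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    return np.array([[c, s], [s, c]])

def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def gate_unitary(gate):
    kind, qubit, theta = gate
    if kind == "cx":
        return CNOT
    u = rx(theta) if kind == "rx" else rz(theta)
    return np.kron(u, I2) if qubit == 0 else np.kron(I2, u)

def circuit_unitary(circuit):
    U = np.eye(4, dtype=complex)
    for g in circuit:
        U = gate_unitary(g) @ U
    return U

def random_gate(rng):
    kind = rng.choice(["rx", "rz", "cx"])
    return (kind, int(rng.integers(2)), float(rng.uniform(0, 2 * np.pi)))

def objectives(circuit, target):
    # (approximation error, two-qubit gate count, depth proxy = gate count)
    err = 1 - abs(np.trace(target.conj().T @ circuit_unitary(circuit))) / 4
    return err, sum(g[0] == "cx" for g in circuit), len(circuit)

def fitness(circuit, target, weights=(10.0, 0.05, 0.01)):
    # scalarized multi-objective cost; lower is better
    return sum(w * o for w, o in zip(weights, objectives(circuit, target)))

def evolve(target, pop_size=40, max_len=12, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = [[random_gate(rng) for _ in range(rng.integers(1, max_len))]
           for _ in range(pop_size)]
    mut_rate = 0.3  # adapted at runtime; a crude stand-in for the fuzzy controller
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, target))
        # widen mutation while the error objective is still large, shrink otherwise
        best_err = objectives(pop[0], target)[0]
        mut_rate = min(0.6, mut_rate * 1.02) if best_err > 0.1 else max(0.1, mut_rate * 0.98)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.choice(len(survivors), 2, replace=False)  # parent indices
            cut = int(rng.integers(1, max_len))                  # one-point crossover
            child = (survivors[a][:cut] + survivors[b][cut:])[:max_len] or [random_gate(rng)]
            if rng.random() < mut_rate:
                child[rng.integers(len(child))] = random_gate(rng)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: fitness(c, target))

if __name__ == "__main__":
    best = evolve(target=CNOT)  # toy target unitary
    print("objectives (error, CX count, depth):", objectives(best, CNOT))
```

A fuller multi-objective treatment would maintain a Pareto front (for example, NSGA-II-style non-dominated sorting) rather than a fixed weighting; the fixed weights above are only for brevity.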
- Quantum computing has the potential to solve certain compute-intensive problems faster than classical computing by leveraging the quantum mechanical properties of superposition and entanglement. This capability can be particularly useful for solving partial differential equations (PDEs), which are challenging even for high-performance computing (HPC) systems, especially in the multidimensional case. This has led researchers to investigate the use of Quantum-Centric High-Performance Computing (QC-HPC) to solve multidimensional PDEs for various applications. However, current quantum PDE solvers, especially those based on Variational Quantum Algorithms (VQAs), suffer from limitations such as low accuracy, long execution times, and limited scalability. In this work, we propose an innovative algorithm, with two variants, for solving multidimensional PDEs. The first variant uses the Finite Difference Method (FDM), Classical-to-Quantum (C2Q) encoding, and numerical instantiation, whereas the second uses FDM, C2Q encoding, and Column-by-Column Decomposition (CCD). We evaluated the proposed algorithm using the Poisson equation as a case study and validated it through experiments on noise-free and noisy simulators, as well as hardware emulators and real quantum hardware from IBM. Our results show higher accuracy, improved scalability, and faster execution times compared with variational PDE solvers, demonstrating the advantage of our approach for solving multidimensional PDEs. Free, publicly accessible full text available March 1, 2026.
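For context on the classical side of the pipeline, the sketch below assembles and solves the standard second-order FDM discretization of the 2-D Poisson equation on the unit square. This is only the baseline linear system that a C2Q-style encoding would load into a quantum state; the quantum solver variants themselves (numerical instantiation, CCD) are not reproduced here.

```python
import numpy as np

# Classical FDM baseline for -u_xx - u_yy = f on the unit square with zero
# Dirichlet boundaries, verified against a manufactured solution.

def laplacian_1d(n, h):
    """Second-order finite-difference 1-D Laplacian (Dirichlet boundaries)."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def poisson_2d(n, f):
    """Assemble the 2-D Laplacian as a Kronecker sum and solve A u = b."""
    h = 1.0 / (n + 1)
    L = laplacian_1d(n, h)
    A = np.kron(np.eye(n), L) + np.kron(L, np.eye(n))
    x = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    b = f(X, Y).ravel()
    return np.linalg.solve(A, b).reshape(n, n)

if __name__ == "__main__":
    # manufactured solution u = sin(pi x) sin(pi y), so f = 2 pi^2 u
    f = lambda X, Y: 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
    n = 31
    u = poisson_2d(n, f)
    x = np.linspace(1 / (n + 1), 1 - 1 / (n + 1), n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    print("max error:", np.abs(u - exact).max())
```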
- Quantum computing (QC) has opened the door to advancements in machine learning (ML) tasks that are currently implemented in the classical domain. Convolutional neural networks (CNNs) are classical ML architectures that exploit data locality and possess a simpler structure than fully connected multi-layer perceptrons (MLPs) without compromising classification accuracy. However, the concept of preserving data locality is usually overlooked in existing quantum counterparts of CNNs, particularly for extracting multiple features from multidimensional data. In this paper, we present a multidimensional quantum convolutional classifier (MQCC) that performs multidimensional and multifeature quantum convolution with average and Euclidean pooling, thus adapting the CNN structure to a variational quantum algorithm (VQA). The experimental work was conducted using multidimensional data to validate the correctness and demonstrate the scalability of the proposed method, utilizing both noisy and noise-free quantum simulations. We evaluated the MQCC model against reported state-of-the-art work on quantum simulators from IBM Quantum and Xanadu using a variety of standard ML datasets. The experimental results show favorable characteristics of the proposed techniques compared with existing work across a number of quantitative metrics, such as the number of training parameters, cross-entropy loss, classification accuracy, circuit depth, and quantum gate count.
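The following numpy-only sketch illustrates the overall structure on a toy scale: a 4x4 image is amplitude-encoded on four qubits, a parameterized rotation-plus-CNOT "filter" plays the role of a quantum convolution layer, pooling is emulated by marginalizing (partially measuring out) half of the qubits, and the remaining probability is trained as a binary classifier score. The gate layout, pooling choice, toy data, and finite-difference training are illustrative assumptions; the MQCC circuits in the paper are substantially richer.

```python
import numpy as np

# Toy statevector simulation of a "quantum convolution + pooling + classifier"
# pipeline on 4 qubits. Not the paper's MQCC; a structural sketch only.

def kron_all(mats):
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def cnot(n, control, target):
    P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)
    a = [P0 if q == control else I2 for q in range(n)]
    b = [P1 if q == control else (X if q == target else I2) for q in range(n)]
    return kron_all(a) + kron_all(b)

def filter_layer(thetas, n=4):
    """One 'quantum filter': RY on every qubit followed by a CNOT ladder."""
    U = kron_all([ry(t) for t in thetas])
    for q in range(n - 1):
        U = cnot(n, q, q + 1) @ U
    return U

def forward(image, thetas):
    amp = image.ravel().astype(complex)
    amp /= np.linalg.norm(amp)                # amplitude encoding
    probs = np.abs(filter_layer(thetas) @ amp) ** 2
    pooled = probs.reshape(4, 4).sum(axis=1)  # marginalize (measure out) two qubits
    return pooled[0]                          # scalar score in [0, 1]

def loss(data, labels, thetas):
    preds = np.array([forward(x, thetas) for x in data])
    return np.mean((preds - labels) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy task: bright-top vs bright-bottom 4x4 images
    tops = [np.vstack([rng.uniform(0.5, 1, (2, 4)), rng.uniform(0, 0.2, (2, 4))]) for _ in range(16)]
    bottoms = [np.vstack([rng.uniform(0, 0.2, (2, 4)), rng.uniform(0.5, 1, (2, 4))]) for _ in range(16)]
    data, labels = tops + bottoms, np.array([1.0] * 16 + [0.0] * 16)
    thetas = rng.uniform(0, np.pi, 4)
    for _ in range(200):                      # crude finite-difference gradient descent
        grad = np.zeros_like(thetas)
        for i in range(len(thetas)):
            d = np.zeros_like(thetas)
            d[i] = 1e-4
            grad[i] = (loss(data, labels, thetas + d) - loss(data, labels, thetas - d)) / 2e-4
        thetas -= 0.5 * grad
    print("final MSE loss:", loss(data, labels, thetas))
```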
- There is a growing need for digital and power electronics to deliver higher power for applications such as electric-vehicle batteries, wind and solar energy sources, data centers, and microwave devices. Higher power also generates more heat, which requires better thermal management. Diamond thin films and substrates are attractive for thermal management in power electronics because of their high thermal conductivity. However, deposition of diamond by microwave plasma-enhanced chemical vapor deposition (MPECVD) requires high temperatures, which can degrade the metallization used in power electronic devices. In this research, titanium (Ti)–aluminum (Al) thin films were deposited by DC magnetron sputtering on p-type Si (100) substrates using a physical mask to create dot patterns for measuring the properties of the contact metallization. The influence of processing conditions and of postdeposition annealing in argon (Ar) and hydrogen (H2) at 380 °C for 1 h on the properties of the contact metallization is studied by measuring the I-V characteristics and the Hall effect. The results indicate a nonlinear response for the as-deposited films and linear, ohmic contact resistance after the postannealing treatments. In addition, the contact resistance, resistivity, carrier concentration, and Hall mobility extracted from the Ti–Al metal contacts to Si (100) are presented and discussed.
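As a small illustration of how ohmic versus non-ohmic behavior is typically judged from such measurements, the sketch below fits I-V points to a straight line and reports the extracted resistance together with a linearity metric. The synthetic data and the R^2 threshold are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Hedged sketch: extract a contact resistance from I-V data by linear
# least-squares and flag non-ohmic (nonlinear) behavior via R^2.

def fit_contact_resistance(voltage, current):
    """Fit I = V/R + c; return R (ohms) and an R^2 linearity metric."""
    slope, intercept = np.polyfit(voltage, current, 1)
    pred = slope * voltage + intercept
    ss_res = np.sum((current - pred) ** 2)
    ss_tot = np.sum((current - current.mean()) ** 2)
    return 1.0 / slope, 1 - ss_res / ss_tot

if __name__ == "__main__":
    v = np.linspace(-1.0, 1.0, 41)
    rng = np.random.default_rng(1)
    i_annealed = v / 22.0 + 1e-4 * rng.normal(size=v.size)  # ohmic response
    i_asdep = 1e-3 * np.sinh(3 * v)                          # nonlinear response
    for name, i in [("annealed", i_annealed), ("as-deposited", i_asdep)]:
        r, r2 = fit_contact_resistance(v, i)
        verdict = "ohmic" if r2 > 0.999 else "non-ohmic"
        print(f"{name}: R ~ {r:.1f} ohm, R^2 = {r2:.4f} -> {verdict}")
```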
- Convolutional neural networks (CNNs) have proven to be a very efficient class of machine learning (ML) architectures for handling multidimensional data by maintaining data locality, especially in the field of computer vision. Data pooling, a major component of CNNs, plays a crucial role in extracting important features of the input data and downsampling its dimensionality. Multidimensional pooling, however, is not efficiently implemented in existing ML algorithms. In particular, quantum machine learning (QML) algorithms have a tendency to ignore data locality for higher dimensions by representing/flattening multidimensional data as simple one-dimensional data. In this work, we propose using the quantum Haar transform (QHT) and quantum partial measurement for performing generalized pooling operations on multidimensional data. We present the corresponding decoherence-optimized quantum circuits for the proposed techniques along with their theoretical circuit depth analysis. Our experimental work was conducted using multidimensional data, ranging from 1-D audio data to 2-D image data to 3-D hyperspectral data, to demonstrate the scalability of the proposed methods. In our experiments, we utilized both noisy and noise-free quantum simulations on a state-of-the-art quantum simulator from IBM Quantum. We also show the efficiency of our proposed techniques for multidimensional data by reporting the fidelity of results.
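The classical analogue of the proposed pooling is easy to state: a one-level Haar transform along each dimension, keeping only the approximation (low-pass) coefficients, halves every dimension and matches average pooling up to a constant scale per dimension. The numpy sketch below shows that classical operation for 1-D, 2-D, and 3-D inputs; the quantum circuits realizing it (QHT plus partial measurement) are described in the paper and are not reproduced here.

```python
import numpy as np

# Classical analogue of QHT-based pooling: keep the Haar approximation
# coefficients along every dimension (dimensions assumed even).

def haar_pool(data):
    """Halve every dimension of `data` by one-level Haar low-pass filtering."""
    out = np.asarray(data, dtype=float)
    for axis in range(out.ndim):
        out = np.moveaxis(out, axis, 0)
        out = (out[0::2] + out[1::2]) / np.sqrt(2)  # low-pass Haar filter
        out = np.moveaxis(out, 0, axis)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = rng.normal(size=8)          # 1-D
    image = rng.normal(size=(8, 8))     # 2-D
    cube = rng.normal(size=(4, 4, 4))   # 3-D, hyperspectral-like
    for x in (audio, image, cube):
        print(x.shape, "->", haar_pool(x).shape)
```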
- The use of transmission electron microscopy (TEM) to observe real-time structural and compositional changes has proven to be a valuable tool for understanding the dynamic behavior of nanomaterials. However, identifying the nanoparticles of interest typically requires an obvious change in position, size, or structure, as compositional changes may not be noticeable during the experiment. Oxidation or reduction often results in only subtle volume changes, so elucidating mechanisms in real time requires atomic-scale resolution or in-situ electron energy loss spectroscopy, which may not be widely accessible. Here, by monitoring the evolution of diffraction contrast, we observe both structural and compositional changes in iron oxide nanoparticles, specifically the oxidation of a wüstite-magnetite (FeO@Fe3O4) core–shell nanoparticle to a single-crystalline magnetite (Fe3O4) nanoparticle. The in-situ TEM images reveal a distinctive light and dark contrast known as the 'Ashby-Brown contrast', which results from coherent strain across the core–shell interface. As the nanoparticles fully oxidize to Fe3O4, the diffraction contrast evolves and then disappears completely, which is confirmed by modeling and simulation of TEM images. This represents a new, simplified approach to tracking the oxidation or reduction mechanisms of nanoparticles using in-situ TEM experiments.
- Assessing set membership and evaluating distances to the related set boundary are problems of widespread interest and can often be computationally challenging. Seeking efficient learning models for such tasks, this paper deals with voltage stability margin prediction for power systems. Supervised training of such models is conventionally hard due to the high-dimensional feature space and a cumbersome label-generation process. Nevertheless, one may find related, easier auxiliary tasks, such as voltage stability verification, that can aid in training for the hard task. This paper develops a novel approach for such settings by leveraging transfer learning. A Gaussian process-based learning model is efficiently trained using learning- and physics-based auxiliary tasks. Numerical tests demonstrate markedly improved performance, alongside the benefit of uncertainty quantification, suiting the needs of the considered application.
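To make the modeling choice concrete, the numpy sketch below performs exact Gaussian-process regression on a toy margin-prediction task and reports the predictive standard deviation that underlies the uncertainty quantification mentioned above. The transfer-learning element is only hinted at by feeding the output of a cheap auxiliary "verification" score in as an extra feature; the kernel, features, and toy data are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Exact GP regression (RBF kernel, Cholesky solve) with predictive uncertainty
# on a synthetic stability-margin-like target. Illustrative only.

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-3, **kern):
    K = rbf_kernel(X_train, X_train, **kern) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test, **kern)
    Kss = rbf_kernel(X_test, X_test, **kern)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy operating points: (load level, auxiliary verification score)
    load = rng.uniform(0.5, 1.5, 40)
    aux_score = 1.0 / (1.0 + np.exp(10 * (load - 1.2)))      # cheap auxiliary-task output
    X = np.column_stack([load, aux_score])
    margin = 1.2 - load + 0.02 * rng.normal(size=load.size)  # expensive "true" margin labels
    loads_test = np.linspace(0.5, 1.5, 5)
    X_test = np.column_stack([loads_test, 1.0 / (1.0 + np.exp(10 * (loads_test - 1.2)))])
    mean, std = gp_predict(X, margin, X_test, lengthscale=0.3)
    for x, m, s in zip(loads_test, mean, std):
        print(f"load={x:.2f}: predicted margin {m:+.3f} +/- {1.96 * s:.3f}")
```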